Session 3-G

Slicing and Virtualization

Conference: 9:00 AM — 10:30 AM EDT
Local: Jul 8 Wed, 6:00 AM — 7:30 AM PDT

AZTEC: Anticipatory Capacity Allocation for Zero-Touch Network Slicing

Dario Bega (IMDEA Networks, Spain); Marco Gramaglia (Universidad Carlos III de Madrid, Spain); Marco Fiore (IMDEA Networks Institute, Spain); Albert Banchs (Universidad Carlos III de Madrid, Spain); Xavier Costa-Perez (NEC Laboratories Europe, Germany)

The combination of network softwarization with network slicing enables the provisioning of very diverse services over the same network infrastructure. However, it also creates a complex environment where the orchestration of network resources cannot be guided by traditional, human-in-the-loop network management approaches. New solutions that perform these tasks automatically and in advance are needed, paving the way to zero-touch network slicing. In this paper, we propose AZTEC, a data-driven framework that effectively allocates capacity to individual slices by adopting an original multi-timescale forecasting model. Hinging on a combination of Deep Learning architectures and a traditional optimization algorithm, AZTEC anticipates resource assignments that minimize the comprehensive management costs induced by resource overprovisioning, instantiation and reconfiguration, as well as by denied traffic demands. Experiments with real-world mobile data traffic show that AZTEC dynamically adapts to traffic fluctuations, and largely outperforms state-of-the-art solutions for network resource orchestration.
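
As a rough illustration of the trade-off AZTEC balances, the hedged sketch below evaluates a toy per-slice cost that combines overprovisioning, reconfiguration, and denied-traffic terms; the cost weights and function shape are illustrative assumptions, not the paper's actual model.

    # Illustrative only: a toy per-slice cost in the spirit of the trade-off AZTEC
    # optimizes (overprovisioning, reconfiguration, and denied-traffic costs).
    # Weights and the decision variable are hypothetical, not the paper's model.

    def slice_cost(allocated, demand, previous_allocation,
                   w_over=1.0, w_sla=10.0, w_reconf=2.0):
        """Cost of allocating `allocated` capacity to a slice with `demand` traffic."""
        overprovision = max(allocated - demand, 0.0)      # idle capacity
        denied = max(demand - allocated, 0.0)             # traffic that cannot be served
        reconfig = abs(allocated - previous_allocation)   # churn from changing the allocation
        return w_over * overprovision + w_sla * denied + w_reconf * reconfig

    # A forecaster would choose `allocated` ahead of time to minimize the expected
    # cost over the predicted demand distribution.
    print(slice_cost(allocated=12.0, demand=10.5, previous_allocation=9.0))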

OKpi: All-KPI Network Slicing Through Efficient Resource Allocation

Jorge Martín-Pérez (Universidad Carlos III de Madrid, Spain); Francesco Malandrino (CNR-IEIIT, Italy); Carla Fabiana Chiasserini (Politecnico di Torino, Italy); Carlos J. Bernardos (Universidad Carlos III de Madrid, Spain)

Networks can now process data as well as transport it; it follows that they can support multiple services, each requiring different key performance indicators (KPIs). Because of the former, it is critical to efficiently allocate network and computing resources to provide the required services, and, because of the latter, such decisions must jointly consider all KPIs targeted by a service. Accounting for newly introduced KPIs (e.g., availability and reliability) requires tailored models and solution strategies, and has been conspicuously neglected by existing works, which are instead built around traditional metrics like throughput and latency. We fill this gap by presenting a novel methodology and resource allocation scheme, named OKpi, which enables high-quality selection of radio points of access as well as VNF (Virtual Network Function) placement and data routing, with polynomial computational complexity. OKpi accounts for all relevant KPIs required by each service, and for any available resource from the fog to the cloud. We prove several important properties of OKpi and evaluate its performance in two real-world scenarios, finding it to closely match the optimum.

Elastic Network Virtualization

Max Alaluna and Nuno Ferreira Neves (University of Lisbon, Portugal); Fernando M. V. Ramos (University of Lisboa, Portugal)

Core to network virtualization is the embedding of virtual networks onto the underlying substrate. Existing approaches are not suitable for cloud environments as they lack its most fundamental requirement: elasticity. To address this issue, we explore the ability to flexibly change the topology of a virtual network by proposing a VNE solution that adds elasticity to the tenant's virtual infrastructures. For this purpose, we introduce four primitives to tenants' virtual networks -- including scale in and scale out -- and propose new algorithms to materialize them. The main challenge is to enable these new services while maximizing resource efficiency and without impacting service quality. Instead of further improving existing online embedding algorithms -- always limited by the inability to predict future demand -- we follow a radically different approach. Specifically, we leverage network migration for our embedding procedures and to introduce a new reconfiguration primitive for the infrastructure provider. As migration introduces network churn, our solution uses this technique parsimoniously, to limit the impact on running services. We show that our solution achieves efficiencies on par with the state-of-the-art solution that fully reconfigures the substrate network, while reducing the migration footprint by at least one order of magnitude.

Letting off STEAM: Distributed Runtime Traffic Scheduling for Service Function Chaining

Marcel Blöcher (Technische Universität Darmstadt, Germany); Ramin Khalili (Huawei Technologies, Germany); Lin Wang (VU Amsterdam & TU Darmstadt, The Netherlands); Patrick Eugster (Università della Svizzera Italiana (USI), Switzerland & Purdue University, TU Darmstadt, SensorHound Inc., USA)

Network function virtualization has introduced a high degree of flexibility for orchestrating service functions. The provisioning of chains of service functions requires making decisions on both (1) placement of service functions and (2) scheduling of traffic through them. The placement problem (1) can be tackled during the planning phase, by exploiting coarse-grained traffic information, and has been studied extensively. However, runtime traffic scheduling (2) for optimizing system utilization and service quality, as required for future edge cloud and mobile carrier scenarios, has not been addressed so far. We fill this gap by presenting a queuing-based system model to characterize the runtime traffic scheduling problem for service function chaining. We propose a throughput-optimal scheduling policy, called the integer allocation maximum pressure policy (IA-MPP). To ensure practicality in large distributed settings, we propose multi-site cooperative IA-MPP (STEAM), which fulfills runtime requirements while achieving near-optimal performance. We examine our policies in various settings representing real-world scenarios. STEAM closely matches IA-MPP in terms of throughput and significantly outperforms possible adaptations of existing static or coarse-grained dynamic solutions, requiring 30%-60% less server capacity for similar service quality. Our STEAM prototype shows feasibility running on a standard server.
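
For readers unfamiliar with maximum pressure policies, the hedged sketch below shows a textbook backpressure-style decision rule: serve the service-function step whose queue-length differential, weighted by its service rate, is largest. It is a simplified illustration, not the paper's IA-MPP or STEAM formulation, and all names are hypothetical.

    # Illustrative sketch of a textbook maximum-pressure (backpressure) decision,
    # not the exact IA-MPP formulation: serve the service-function step whose
    # queue-length differential (pressure) times service rate is largest.

    def max_pressure_choice(queues, successors, service_rate):
        """
        queues: dict mapping step -> current queue length
        successors: dict mapping step -> next step in its chain (or None at the end)
        service_rate: dict mapping step -> packets the server can process per slot
        Returns the step to serve in this slot.
        """
        def pressure(step):
            downstream = successors.get(step)
            backlog_diff = queues[step] - (queues[downstream] if downstream else 0)
            return service_rate[step] * backlog_diff

        return max(queues, key=pressure)

    queues = {"fw": 40, "nat": 25, "dpi": 5}
    successors = {"fw": "nat", "nat": "dpi", "dpi": None}
    rates = {"fw": 10, "nat": 8, "dpi": 12}
    print(max_pressure_choice(queues, successors, rates))   # serves the highest-pressure step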

Session Chair

Xiaojun Cao (Georgia State University)

Session 4-G

Caching II

Conference: 11:00 AM — 12:30 PM EDT
Local: Jul 8 Wed, 8:00 AM — 9:30 AM PDT

On the Economic Value of Mobile Caching

Yichen Ruan and Carlee Joe-Wong (Carnegie Mellon University, USA)

Recent growth in user demand for mobile data has strained mobile network infrastructure. One possible solution is to use mobile (i.e., moving) devices to supplement existing infrastructure according to users' needs at different times and locations. However, it is unclear how much value these devices add relative to their deployment costs: they may, for instance, interfere with existing network infrastructure, limiting the potential benefits. We take the first step towards quantifying the value of this supplemental infrastructure by examining the use case of mobile caches. We consider a network operator using both mobile (e.g., vehicular) and stationary (small cell) caches, and find the optimal amount of both cache types under time- and location-varying user demands, as a function of the cache prices. In doing so, we account for interference between users' connections to the different caches, which requires solving a non-convex optimization problem. We show that there exists a threshold price above which no vehicular caches are purchased. Moreover, as the network operator's budget increases, vehicular caching yields little additional value beyond that provided by small cell caches. These results may help network operators and cache providers find conditions under which vehicles add value to existing networks.

RepBun: Load-Balanced, Shuffle-Free Cluster Caching for Structured Data

Minchen Yu (Hong Kong University of Science and Technology, Hong Kong); Yinghao Yu (The Hong Kong University of Science and Technology, Hong Kong); Yunchuan Zheng, Baichen Yang and Wei Wang (Hong Kong University of Science and Technology, Hong Kong)

Cluster caching systems increasingly store structured data objects in the columnar format. However, these systems routinely face imbalanced load that significantly impairs I/O performance. Existing load-balancing solutions, while effective for reading unstructured data objects, fall short in handling columnar data. Unlike unstructured data, which can only be read through a full-object scan, columnar data supports direct queries of specific columns, with two distinct access patterns: (1) columns have heavily skewed popularity, and (2) hot columns are likely to be accessed together in a query job. Based on these two access patterns, we propose an effective load-balancing solution for structured data. Our solution, which we call RepBun, groups hot columns into a bundle. It then creates multiple replicas of the column bundle and stores them uniformly across servers. We show that RepBun achieves improved load balancing with reduced memory overhead, while avoiding data shuffling between cache servers. We implemented RepBun atop Alluxio, a popular in-memory distributed storage system, and evaluated its performance through an EC2 deployment against the TPC-H benchmark workload. Experimental results show that RepBun outperforms existing load-balancing solutions, with significantly shorter read latency and faster query completion.
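
The hedged sketch below conveys the general idea of bundling hot columns and replicating the bundle across cache servers; the popularity threshold and the round-robin placement are assumptions for illustration, not RepBun's actual policy.

    # Illustrative sketch only: group the hottest columns into a bundle and place
    # several replicas of that bundle across cache servers. The hot-fraction
    # threshold and round-robin placement are hypothetical, not RepBun's policy.

    def bundle_and_place(column_popularity, servers, hot_fraction=0.2, replicas=3):
        """Return a mapping server -> list of column bundles (each bundle is a tuple)."""
        ranked = sorted(column_popularity, key=column_popularity.get, reverse=True)
        n_hot = max(1, int(len(ranked) * hot_fraction))
        bundle = tuple(ranked[:n_hot])                 # hot columns read together
        placement = {s: [] for s in servers}
        for i in range(replicas):                      # spread replicas uniformly
            placement[servers[i % len(servers)]].append(bundle)
        return placement

    popularity = {"user_id": 950, "price": 820, "comment": 40, "ts": 15}
    print(bundle_and_place(popularity, servers=["s1", "s2", "s3", "s4"]))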

RePiDeM: A Refined POI Demand Modeling based on Multi-Source Data

Ruiyun Yu, Dezhi Ye and Jie Li (Northeastern University, China)

Point-of-Interest (POI) demand modeling in urban areas is critical for building smart cities, with various applications, e.g., business location selection and urban planning. However, existing work does not fully utilize human mobility data and ignores interaction-aware information. In this work, we design a Refined POI Demand Modeling framework, named RePiDeM, to identify people's daily demands based on human mobility data and interaction information between people and POIs. Specifically, we introduce a Measurement Report (MR) travel inference algorithm to estimate the POI visiting probability based on human mobility from cellular signal data and POI features from online reviews. Further, to address the data sparsity issue, we design a Multi-source Attention Neural Collaborative Filtering (MANCF) model to estimate the access of missing POIs, which can capture the varying aspect attentions that different regions pay to different POIs. We conduct extensive experiments on real-world data collected in the Chinese city of Shenyang, which show that RePiDeM is effective for modeling regional POI demands.

Universal Online Sketch for Tracking Heavy Hitters and Estimating Moments of Data Streams

Qingjun Xiao (SouthEast University of China, China); Zhiying Tang (Southeast University, China); Shigang Chen (University of Florida, USA)

Traffic measurement is key to many network management tasks, such as performance monitoring and cyber-security. To process fast packet streams in the size-limited SRAM of line cards, many space-sublinear algorithms have been proposed, such as CountMin and CountSketch. However, most of them are designed for specific measurement tasks. Implementing multiple independent sketches places a burden on the online operations of a network device. It is highly desirable to design a universal sketch that not only tracks individual large flows (called heavy hitters) but also reports overall traffic distribution statistics (called moments). The prior work UnivMon successfully tackled this ambitious quest. However, it incurs large and variable per-packet processing overhead, which may result in a significant throughput bottleneck in high-rate packet streaming, given that each packet requires 65 hashes and 64 memory accesses on average, and many times that in the worst case. To address this performance issue, we fundamentally redesign the solution architecture, from hierarchical sampling to new progressive sampling and from CountSketch to new ActiveCM+, which ensures the per-packet overhead is a small constant (4 hashes and 4 memory accesses) in the worst case, making it more suitable for online operations. The new design also improves measurement accuracy under the same memory.
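
For context on the task-specific sketches the abstract refers to, here is a minimal textbook Count-Min sketch (not the paper's ActiveCM+ variant); each update touches one counter per row, which is where the per-packet hash and memory-access costs come from.

    # A minimal textbook Count-Min sketch, shown for context only; it is one of the
    # task-specific structures the abstract contrasts with a universal sketch.
    import random

    class CountMin:
        def __init__(self, width=1024, depth=4, seed=42):
            rng = random.Random(seed)
            self.width, self.depth = width, depth
            self.seeds = [rng.getrandbits(64) for _ in range(depth)]
            self.table = [[0] * width for _ in range(depth)]

        def _index(self, row, key):
            return hash((self.seeds[row], key)) % self.width

        def add(self, key, count=1):
            for row in range(self.depth):              # `depth` hashes and memory writes
                self.table[row][self._index(row, key)] += count

        def estimate(self, key):
            return min(self.table[row][self._index(row, key)] for row in range(self.depth))

    cm = CountMin()
    for pkt in ["10.0.0.1", "10.0.0.1", "10.0.0.2"]:
        cm.add(pkt)
    print(cm.estimate("10.0.0.1"))   # >= the true count of 2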

Session Chair

Stratis Ioannidis (Northeastern University)

Session 5-G

SDN I

Conference: 2:00 PM — 3:30 PM EDT
Local: Jul 8 Wed, 11:00 AM — 12:30 PM PDT

A Deep Analysis on General Approximate Counters

Tong Yun and Bin Liu (Tsinghua University, China)

Approximate counters play an important role in many computing domains, such as network measurement, parallel computing, and machine learning. With the emergence of new problems in these domains, such as flow counting and the addition of approximate counters, the traditionally used simple Morris counter no longer suffices, and a more general Morris counter is required. However, there has so far been a lack of complete theoretical research on the statistical properties of this approximate counter. This paper proposes an analysis of general Morris counters and derives the minimum upper bound of the variance. To the best of our knowledge, this is the first work to thoroughly analyze the statistical properties of general Morris counters in theory. In addition, practical application scenarios are analyzed to show that our conclusions are useful for testing the performance of approximate counters and for guiding the design of architectures. Our proof methods are also general and can be applied to other scenarios involving either simple Morris counters or their generalized versions.
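
As a concrete reference point, the sketch below implements a base-b Morris counter; the classic counter is the b = 2 special case of the general counter the abstract analyzes. The class interface itself is illustrative only.

    # A minimal base-b Morris counter: a few bits of state approximate a large count.
    import random

    class MorrisCounter:
        def __init__(self, base=2.0, seed=0):
            self.base = base
            self.c = 0                       # the (small) stored exponent
            self.rng = random.Random(seed)

        def increment(self):
            # Increase the stored value with probability base^(-c).
            if self.rng.random() < self.base ** (-self.c):
                self.c += 1

        def estimate(self):
            # Unbiased estimate of the number of increments.
            return (self.base ** self.c - 1) / (self.base - 1)

    m = MorrisCounter(base=2.0)
    for _ in range(100_000):
        m.increment()
    print(m.c, round(m.estimate()))          # small exponent, estimate near 100000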

Efficient and Consistent TCAM Updates

Bohan Zhao, Rui Li and Jin Zhao (Fudan University, China); Tilman Wolf (University of Massachusetts, USA)

The dynamic nature of software-defined networking requires frequent updates to the flow table in the data plane of switches. Therefore, the ternary content-addressable memory (TCAM) used in switches to match packet header fields against forwarding rules needs to support high rates of updates. Existing off-the-shelf switches update rules in batches for efficiency but may suffer from forwarding inconsistencies during the batch update. In this paper, we design and evaluate a TCAM update optimization framework that can guarantee consistent forwarding during the entire update process while making use of a layered TCAM structure. Our approach is based on a modified-entry-first write-back strategy that significantly reduces the overhead from movements of TCAM entries. In addition, our approach detects reordering cases, which are handled using efficient solutions. Based on our evaluation results, we can reduce the cost of TCAM updates by 30%-88% compared to state-of-the-art techniques.

Faster and More Accurate Measurement through Additive-Error Counters

Ran Ben Basat (Harvard University, USA); Gil Einziger (Ben-Gurion University Of The Negev, Israel); Michael Mitzenmacher (Harvard University, USA); Shay Vargaftik (VMware, Israel)

Network applications such as load balancing, traffic engineering, and intrusion detection often rely on timely traffic measurements, including estimating flow size and identifying the heavy hitter flows. Counter arrays, which provide approximate counts, are a fundamental building block for many measurement algorithms, and current works optimize such arrays for throughput and/or space efficiency.

We suggest a novel sampling technique that reduces the required size of counters and allows more counters to fit within the same space. We formally show that our method yields better space to accuracy guarantees for multiple flavors of measurement algorithms. We also empirically evaluate our technique against several other measurement algorithms on real Internet traces. Our evaluation shows that our method improves the throughput and the accuracy of approximate counters and corresponding measurement algorithms.
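
The hedged sketch below conveys the flavor of this idea: count only a sample of events so each counter needs fewer bits, at the price of a bounded additive error. The fixed sampling probability is a simplification for illustration, not the paper's exact technique.

    # Illustrative only: a simple sampled counter. Counting fewer events lets each
    # counter use fewer bits, at the cost of a small additive error. The fixed
    # sampling probability is a simplification, not the paper's scheme.
    import random

    class SampledCounter:
        def __init__(self, prob=1 / 16, seed=1):
            self.prob = prob
            self.count = 0                   # grows ~prob times slower, so fewer bits suffice
            self.rng = random.Random(seed)

        def increment(self):
            if self.rng.random() < self.prob:
                self.count += 1

        def estimate(self):
            return self.count / self.prob    # unbiased, with an additive error term

    sc = SampledCounter()
    for _ in range(50_000):
        sc.increment()
    print(sc.count, round(sc.estimate()))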

Network Monitoring for SDN Virtual Networks

Gyeongsik Yang, Heesang Jin, Minkoo Kang, Gi Jun Moon and Chuck Yoo (Korea University, Korea (South))

This paper proposes V-Sight, a network monitoring framework for software-defined networking (SDN)-based virtual networks. Network virtualization with SDN (SDN-NV) makes it possible to realize programmable virtual networks, so the technology can be beneficial to cloud services for tenants. However, to the best of our knowledge, although network monitoring is a vital prerequisite for managing and optimizing virtual networks, it has not been investigated in the context of SDN-NV. Thus, virtual networks suffer from non-isolated statistics between virtual networks, high monitoring delays, and excessive control channel consumption for gathering statistics, which critically hinders the benefits of SDN-NV. To solve these problems, V-Sight presents three key mechanisms: 1) statistics virtualization for isolated statistics, 2) transmission disaggregation for reduced transmission delay, and 3) pCollector aggregation for efficient control channel consumption. V-Sight is implemented on top of OpenVirteX, and the evaluation results demonstrate that V-Sight reduces monitoring delay and control channel consumption by up to 454 times.

Session Chair

Puneet Sharma (Hewlett Packard Labs)

Session 6-G

SDN II

Conference: 4:00 PM — 5:30 PM EDT
Local: Jul 8 Wed, 1:00 PM — 2:30 PM PDT

Coeus: Consistent and Continuous Network Update in Software-Defined Networks

Xin He and Jiaqi Zheng (Nanjing University, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Chong Zhang and Wajid Rafique (Nanjing University, China); Geng Li (Yale University, USA); Wanchun Dou (Nanjing University, China); Qiang Ni (Lancaster University, United Kingdom (Great Britain))

Network update enables Software-Defined Networks (SDNs) to optimize data plane performance via southbound APIs. A single update between the initial and final network states cannot handle high-frequency changes or burst events that occur during the update procedure in time, leading to prolonged update times and inefficiency. In contrast, continuous update can respond to network condition changes at all times. However, existing work, notably "Update Algebra", can only guarantee blackhole- and loop-freedom; the congestion-free property cannot be respected during the update procedure. In this paper, we propose Coeus, a continuous network update system that maintains blackhole-, loop- and congestion-freedom simultaneously. First, we establish an operation-based continuous update model. Based on this model, we dynamically reconstruct an operation dependency graph to capture unexecuted update operations and link utilization variations. Then, we develop an operation composition algorithm to eliminate redundant update commands and an operation node partition algorithm to speed up the update. We prove that the partition algorithm is optimal and guarantees consistency. Finally, extensive evaluations show that Coeus can improve the makespan by at least 179% compared with the state of the art when the arrival rate of update events equals three per second.

Flow Table Security in SDN: Adversarial Reconnaissance and Intelligent Attacks

Mingli Yu (Pennsylvania State University, USA); Ting He (Penn State University, USA); Patrick McDaniel (Pennsylvania State University, USA); Quinn Burke (Pennsylvania State University, USA)

The performance-driven design of SDN architectures leaves many security vulnerabilities, a notable one being the communication bottleneck between the controller and the switches. Functioning as a cache between the controller and the switches, the flow table mitigates this bottleneck by caching flow rules received from the controller at each switch, but is very limited in size due to the high cost and power consumption of the underlying storage medium. It thus presents an easy target for attacks. Observing that many existing defenses are based on simplistic attack models, we develop a model of intelligent attacks that exploit specific cache-like behaviors of the flow table to infer its internal configuration and state, and then design attack parameters accordingly. Our evaluations show that such attacks can accurately expose the internal parameters of the target flow table and cause measurable damage with minimal effort.

Toward Optimal Software-Defined Interdomain Routing

Qiao Xiang (Yale University, USA); Jingxuan Zhang (Tongji University, China); Kai Gao (Sichuan University, China); Yeon-sup Lim (IBM T. J. Watson Research Center, USA); Franck Le (IBM T. J. Watson, USA); Geng Li and Y. Richard Yang (Yale University, USA)

End-to-end route control spanning a set of networks provides substantial benefits and business opportunities to network operators and end users. BGP, the de facto interdomain routing protocol, provides no programmable control. Recent proposals for interdomain control, e.g., ARROW and SDX, provide more mechanisms and interfaces, but they are only point or incremental solutions. In this paper, we provide the first systematic formulation of the software-defined internetworking (SDI) problem, in which each of a set of participating interdomain networks exposes a programmable interface to allow a logically centralized client to define the interdomain route of each network, just as a traditional SDN client defines the next-hop output port of a traditional SDN switch, extending SDN from intra-domain control to generic interdomain control. We conduct rigorous analysis to show that the problem of finding the optimal end-to-end route for SDI is NP-hard. We develop a blackbox optimization algorithm, which leverages Bayesian optimization theory and important properties of interdomain routing algebra, to sample end-to-end routes sequentially and find a near-optimal policy-compliant end-to-end route with a small number of samples. We implement a prototype of our optimization algorithm and validate its efficiency and efficacy via extensive experiments using real interdomain network topologies.
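
The skeleton below illustrates sequential blackbox search over candidate end-to-end routes. A plain random sampler stands in for the Bayesian-optimization sampler the paper develops, and the cost oracle and route names are hypothetical placeholders.

    # Illustrative only: sequential blackbox search over candidate end-to-end routes.
    # A random sampler replaces the Bayesian optimization used in the paper, and
    # the cost oracle is a hypothetical stand-in for a measured route cost.
    import random

    def sample_routes(candidate_routes, route_cost, budget=20, seed=7):
        """Query the (expensive) route_cost oracle `budget` times; return the best route seen."""
        rng = random.Random(seed)
        best_route, best_cost = None, float("inf")
        for _ in range(budget):
            route = rng.choice(candidate_routes)      # BO would pick this point adaptively
            cost = route_cost(route)                  # e.g., measured end-to-end latency
            if cost < best_cost:
                best_route, best_cost = route, cost
        return best_route, best_cost

    routes = [("AS1", "AS3", "AS7"), ("AS1", "AS2", "AS7"), ("AS1", "AS4", "AS5", "AS7")]
    print(sample_routes(routes, route_cost=lambda r: 10 * len(r) + random.random()))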

Towards Latency Optimization in Hybrid Service Function Chain Composition and Embedding

Danyang Zheng, Chengzong Peng and Xueting Liao (Georgia State University, USA); Ling Tian and Guangchun Luo (University of Electronic Science and Technology of China, China); Xiaojun Cao (Georgia State University, USA)

In Network Function Virtualization (NFV), to satisfy the Service Functions (SFs) requested by a customer, service providers compose a Service Function Chain (SFC) and embed it onto the shared Substrate Network (SN). For many latency-sensitive and computing-intensive applications, the customer forwards data to the cloud/server and the cloud/server sends the results/models back, which may require different SFs to handle the forward and backward traffic. An SFC that requires different SFs in the forward and backward directions is referred to as a hybrid SFC (h-SFC). In this paper, we, for the first time, comprehensively study how to optimize the latency cost in Hybrid SFC composition and Embedding (HSFCE). When each substrate node provides only one unique SF, we prove the NP-hardness of HSFCE and propose the first 2-approximation algorithm to jointly optimize the processes of h-SFC construction and embedding, called Eulerian Circuit based Hybrid SFP optimization (EC-HSFP). When a substrate node provides various SFs, we extend EC-HSFP and propose the efficient Betweenness Centrality based Hybrid SFP optimization (BC-HSFP) algorithm. Our extensive simulations and analysis show that EC-HSFP holds the 2-approximation, while BC-HSFP outperforms algorithms directly extended from state-of-the-art techniques by an average of 20%.
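
As a small illustration of the graph metric BC-HSFP builds on, the sketch below ranks substrate nodes by betweenness centrality using networkx; the toy topology and the idea of favoring high-centrality nodes as SF hosts are illustrative assumptions, not the paper's algorithm.

    # Illustrative only: rank substrate nodes by betweenness centrality, the metric
    # BC-HSFP builds on. The topology and placement heuristic are hypothetical.
    import networkx as nx

    substrate = nx.Graph()
    substrate.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("b", "e"), ("e", "d")])

    centrality = nx.betweenness_centrality(substrate)
    ranked = sorted(centrality, key=centrality.get, reverse=True)
    print(ranked)   # nodes lying on many shortest paths are natural candidates to host SFs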

Session Chair

Y. Richard Yang (Yale University)
